Intrinsic motivation has garnered significant attention in recent years, enabling both living beings and robots to learn autonomously and cumulatively, even without extrinsic motivation (rewards from the environment). The concept, drawing inspiration from psychology and neuroscience, has opened new avenues in artificial intelligence. Algorithmic architectures for intrinsic motivation facilitate exploration and the effective acquisition of motor skills in settings where environment rewards are sparse or absent, a situation common to many real-world problems in which large portions of the environment offer no explicit reward. Consequently, intrinsic motivation holds not only theoretical significance for enhancing artificial intelligence algorithms, particularly in exploration tasks, but also practical value for real-world and near-real-world applications. In this paper, we examine the significance of intrinsic motivation, beginning with a brief overview of its origins in psychology. We then systematically categorize and review research on intrinsic motivation in artificial intelligence, and discuss reinforcement learning as a successful framework for incorporating it. Finally, we explore practical applications, current limitations, and future directions for intrinsic motivation research.
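To make the idea concrete, one common way intrinsic motivation is folded into reinforcement learning is as a reward bonus added to the (possibly zero) extrinsic reward. The sketch below is purely illustrative and not drawn from this paper: it uses a simple count-based novelty bonus, and the names (`beta`, `visit_counts`, `shaped_reward`) are ours.

```python
import math
from collections import defaultdict

# Illustrative sketch (assumption, not the paper's method): a count-based
# intrinsic reward. Each state earns a novelty bonus that shrinks as the
# state is revisited, so the agent gets learning signal even when the
# environment's extrinsic reward is sparse or absent.
visit_counts = defaultdict(int)

def shaped_reward(state, extrinsic_reward, beta=0.1):
    """Return extrinsic reward plus a decaying novelty bonus."""
    visit_counts[state] += 1
    intrinsic_bonus = beta / math.sqrt(visit_counts[state])
    return extrinsic_reward + intrinsic_bonus

# Even with zero extrinsic reward, novel states yield positive signal,
# and repeated visits yield progressively less:
r_first = shaped_reward((0, 0), 0.0)
r_again = shaped_reward((0, 0), 0.0)
```

An RL agent maximizing this shaped reward is pushed toward rarely visited states, which is one simple instantiation of the exploration behavior the survey discusses.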